An architecture for the automated detection of textual indicators of reflection
Manual annotation of evidence of reflection expressed in texts is time-consuming, especially as fine-grained models of reflection require extensive training of coders; without such training, inter-coder reliability is low. Automated reflection detection offers a solution to this problem. This paper proposes a new basic architecture for detecting evidence of reflection that allows written accounts to be automatically marked up for certain observable elements of reflection. Furthermore, three promising example annotators of elements of reflection are identified, implemented, and demonstrated: detecting reflective keywords, premises and conclusions of arguments, and questions. Automated detection of reflection has the potential to support learning with technology on at least three levels: it can foster awareness of the reflectivity of one's own writing, it can help readers become aware of the reflective writing of others, and it can make visible the reflective writing of learning networks as a whole.
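The three example annotators mentioned above can be sketched as simple rule-based detectors. The indicator words below are illustrative assumptions, not the paper's actual parameterisation:

```python
import re

# Illustrative indicator lists -- assumptions for this sketch; the paper's
# annotators are parameterised with their own empirically derived indicators.
REFLECTIVE_KEYWORDS = {"realise", "learn", "felt", "wonder", "understand"}
PREMISE_MARKERS = {"because", "since"}
CONCLUSION_MARKERS = {"therefore", "thus", "hence"}

def _contains_any(text: str, words: set) -> bool:
    """True if any word occurs in the text as a whole word."""
    return any(re.search(rf"\b{re.escape(w)}\b", text) for w in words)

def annotate(sentence: str) -> set:
    """Mark up one sentence with the elements of reflection it exhibits."""
    labels = set()
    lower = sentence.lower()
    if _contains_any(lower, REFLECTIVE_KEYWORDS):
        labels.add("reflective-keyword")
    if _contains_any(lower, PREMISE_MARKERS):
        labels.add("premise")
    if _contains_any(lower, CONCLUSION_MARKERS):
        labels.add("conclusion")
    if sentence.rstrip().endswith("?"):
        labels.add("question")
    return labels
```

Running such annotators over each sentence of a written account yields the automated mark-up the architecture describes; e.g. `annotate("What did I learn from this?")` detects both a reflective keyword and a question.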
Keywords of written reflection - a comparison between reflective and descriptive datasets
This study investigates reflection keywords by contrasting two datasets: one of reflective sentences and one of descriptive sentences. The log-likelihood statistic reveals several reflection keywords, which are discussed in the context of a model of reflective writing. These keywords are seen as a useful building block for tools that can automatically analyse reflection in texts.
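The log-likelihood keyword statistic used here is the standard corpus-comparison measure (Dunning's G²). A minimal sketch, assuming each corpus is supplied as a token-frequency `Counter`:

```python
import math
from collections import Counter

def log_likelihood(word: str, reflective: Counter, descriptive: Counter) -> float:
    """Log-likelihood (G2) for one word across two corpora, each given as a
    Counter of token frequencies. Higher values mean the word's frequency
    differs more between the corpora than chance would predict."""
    a, b = reflective[word], descriptive[word]          # observed counts
    c, d = sum(reflective.values()), sum(descriptive.values())  # corpus sizes
    e1 = c * (a + b) / (c + d)   # expected count in the reflective corpus
    e2 = d * (a + b) / (c + d)   # expected count in the descriptive corpus
    g2 = 0.0
    if a:
        g2 += a * math.log(a / e1)
    if b:
        g2 += b * math.log(b / e2)
    return 2 * g2
```

Words scoring above 3.84 (the χ² critical value at p < .05, 1 df) differ significantly between the datasets; ranking all words by G² surfaces the candidate reflection keywords.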
Comparing automatically detected reflective texts with human judgements
This paper reports the descriptive results of an experiment comparing automatically detected reflective and non-reflective texts against human judgements. Based on the theory of reflective writing assessment and its operationalisation, five elements of reflection were defined. For each element of reflection, a set of indicators was developed that automatically annotates texts for reflection, parameterised with authoritative texts. From a large blog corpus, 149 texts were retrieved and annotated as either reflective or non-reflective. An online survey was then used to gather human judgements of these texts. These two datasets were used to compare the quality of the reflection detection algorithm with human judgements. The analysis indicates the expected difference between reflective and non-reflective texts.
Automated Analysis of Reflection in Writing: Validating Machine Learning Approaches
Reflective writing is an important educational practice for training reflective thinking. Currently, researchers must analyse these writings manually, which limits both practice and research because the analysis is time- and resource-consuming. This study evaluates whether machine learning can be used to automate this manual analysis. It investigates eight categories that are often used in models to assess reflective writing, and the evaluation is based on 76 student essays (5,080 sentences), largely from second- and third-year health, business, and engineering students. To test the automated analysis of reflection in writing, machine learning models were built on a random sample of 80% of the sentences and then tested on the remaining 20%. Overall, the standardized evaluation shows that five of the eight categories can be detected automatically with substantial or almost perfect reliability, while the other three can be detected with moderate reliability (Cohen's κ between .53 and .85). The accuracies of the automated analysis were on average 10% lower than those of the manual analysis. These findings enable reflection analytics that are immediate and scalable.
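The reliability measure reported throughout, Cohen's κ, corrects raw agreement between the model's labels and the manual labels for agreement expected by chance. A minimal pure-Python computation:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b) -> float:
    """Cohen's kappa: chance-corrected agreement between two label sequences
    of equal length (e.g. automated vs. manual sentence labels)."""
    n = len(rater_a)
    # observed agreement: fraction of items labelled identically
    p_observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # expected agreement: from each rater's marginal label distribution
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    if p_expected == 1.0:  # degenerate case: both raters use a single label
        return 1.0
    return (p_observed - p_expected) / (1 - p_expected)
```

On the usual interpretation scale, κ around .53 is moderate and values toward .85 are substantial to almost perfect, matching the range the study reports.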
Reflective Writing Analytics - Empirically Determined Keywords of Written Reflection
Despite their importance for educational practice, reflective writings are still analysed and assessed manually, constraining the use of this educational technique. Recently, research has begun to investigate automated approaches to analysing reflective writing. Foundational to many automated approaches is knowledge of the words that are important for the genre. This research presents keywords that are specific to several categories of a reflective writing model. These keywords were derived, using the log-likelihood method, from eight datasets containing several thousand instances. Both performance measures, accuracy and Cohen's κ, were estimated for these keywords with ten-fold cross-validation. The results reached an average accuracy of 0.78 across all eight categories and a fair to good inter-rater reliability for most categories, even though no sophisticated rule-based mechanisms or machine learning approaches were used. This research contributes to the development of automated reflective writing analytics based on data-driven empirical foundations.
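Ten-fold cross-validation, as used for the estimates above, partitions the instances into ten folds and evaluates on each fold in turn while training (here: deriving keywords) on the rest. A minimal sketch of the fold bookkeeping, with the per-fold keyword derivation and scoring omitted:

```python
def k_fold_splits(n_items: int, k: int = 10):
    """Yield (train, test) index lists for k-fold cross-validation.
    Every item appears in exactly one test fold; fold sizes differ by
    at most one when k does not divide n_items."""
    fold_sizes = [n_items // k + (1 if i < n_items % k else 0) for i in range(k)]
    indices = list(range(n_items))
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size
```

Averaging accuracy and κ over the ten test folds gives estimates that are less sensitive to any single train/test split than a one-off hold-out evaluation.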
Automated detection of reflection in texts. A machine learning based approach
Promoting reflective thinking is an important educational goal. A common educational practice is to provide opportunities for learners to express their reflective thoughts in writing. The analysis of such text with regard to reflection is mainly a manual task that employs the principles of content analysis.
Considering the amount of text produced by online learning systems, tools that automatically analyse text with regard to reflection would greatly benefit research and practice.
Previous research has explored the potential of dictionary-based approaches that automatically map keywords to categories associated with reflection. Other automated methods use manually constructed rules to extract insight from text. Machine learning has shown potential for classifying text with regard to reflection-related constructs. However, little is known about whether machine learning can reliably analyse text with regard to the categories of reflective writing models.
This thesis investigates the reliability of machine learning algorithms to detect reflective thinking in text. In particular, it studies whether text segments from student writings can be analysed automatically to detect the presence (or absence) of reflective writing model categories.
A synthesis of the models of reflective writing is performed to determine the categories frequently used to analyse reflective writing. For each of these categories, several machine learning algorithms are evaluated with regard to their ability to reliably detect reflective writing categories.
The evaluation finds that many of the categories can be predicted reliably. The automated method, however, does not reach the same level of reliability as human coders.
Understanding Accessibility as a Process through the Analysis of Feedback from Disabled Students
Accessibility cannot be fully achieved through adherence to technical guidelines; it must include processes that take account of the diverse contexts and needs of individuals. A complex yet important aspect of this is understanding and utilising feedback from disabled users of systems and services. Open comment feedback can complement other practices by providing rich data from user perspectives, but it presents challenges for analysis at scale. In this paper, we analyse a large dataset of open comment feedback from disabled students on their online and distance learning experience, and we explore opportunities and challenges in the analysis of this data. This includes the automated and manual analysis of content and themes, and the integration of information about the respondent alongside their feedback. Our analysis suggests that procedural themes, such as changes to the individual over time and their experiences of interpersonal interactions, provide key examples of areas where feedback can lead to insight for improving accessibility. Reflecting on this analysis in the context of our institution, we provide recommendations on the analysis of feedback data and on how feedback can be better embedded into organisational processes.
The SemSearchXplorer - exploring semantic search results with semantic visualizations
SemSearchXplorer is a toolkit for the exploration of semantic data. Its goal is to lower user barriers to accessing information in semantic data repositories. To this end, SemSearchXplorer supports the user in three respects: (1) it supports querying of the semantic data with a keyword-based approach, so users do not need to learn a semantic query language; (2) it helps users find relevant results both by using semantically enriched information about the results and by offering semantic filter options to narrow down the result set; and (3) it provides information exploration capabilities through semantic visualizations recommended by the system. Filtering of semantic search results helps narrow the result set down to a more manageable amount of information. Beyond searching for relevant information, facilities for exploring the results help users gain insight into the context of results. With several semantic visualizations, we try to help users make sense of the raw data. Based on the assumption that no single visualization fits all exploration needs, SemSearchXplorer recommends visualizations based on the information users have selected.
A Visualisation Dashboard for Contested Collective Intelligence. Learning Analytics to Improve Sensemaking of Group Discussion
The skill to take part in and contribute to debates is important for informal and formal learning. Especially when addressing highly complex issues, it can be difficult to support learners in participating in effective group discussion and to stay abreast of all the information collectively generated during the discussion. Technology can help with the engagement and sensemaking of such large debates; for example, it can monitor how healthy a debate is and provide indicators of how participation is distributed. Contested Collective Intelligence (CCI) is a framework that aims to harness the intelligence of small to very large groups with the support of structured discourse and argumentation tools. CCI tools provide a rich source of semantic data that, if appropriately processed, can generate powerful analytics of the online discourse. This study presents a visualisation dashboard with several visual analytics that show important aspects of online debates facilitated by CCI discussion tools. The dashboard was designed to improve sensemaking and participation in online debates and has been evaluated in two studies, a lab experiment and a field study, in the context of two Higher Education institutions. The paper reports findings of a usability evaluation of the visualisation dashboard. The descriptive findings suggest that participants with little experience in using analytics visualisations were able to perform well on the given tasks. This is a promising result for the application of such visualisation technologies, as discourse-centric learning analytics interfaces can help support learners' engagement with and sensemaking of complex online debates.